
    Quantitative Regular Expressions for Arrhythmia Detection Algorithms

    Full text link
    Motivated by the problem of verifying the correctness of arrhythmia-detection algorithms, we present a formalization of these algorithms in the language of Quantitative Regular Expressions. QREs are a flexible formal language for specifying complex numerical queries over data streams, with provable runtime and memory consumption guarantees. The medical-device algorithms of interest include peak detection (where a peak in a cardiac signal indicates a heartbeat) and various discriminators, each of which uses a feature of the cardiac signal to distinguish fatal from non-fatal arrhythmias. Expressing these algorithms' desired output in current temporal logics, and implementing them via monitor synthesis, is cumbersome, error-prone, computationally expensive, and sometimes infeasible. In contrast, we show that a range of peak detectors (in both the time and wavelet domains) and various discriminators at the heart of today's arrhythmia-detection devices are easily expressible in QREs. The fact that one formalism (QREs) is used to describe the desired end-to-end operation of an arrhythmia detector opens the way to formal analysis and rigorous testing of these detectors' correctness and performance. Such analysis could alleviate the regulatory burden on device developers when modifying their algorithms. The performance of the peak-detection QREs is demonstrated by running them on real patient data, on which they yield results on par with those provided by a cardiologist. (Comment: CMSB 2017: 15th Conference on Computational Methods for Systems Biology.)
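
    As a rough illustration of the kind of bounded-memory streaming query the abstract describes, a minimal peak detector can be sketched in plain Python. This is not QRE syntax, and the threshold and local-maximum rule are illustrative assumptions, not the paper's definitions:

    ```python
    # Illustrative streaming peak detector (not QRE syntax): emits the index of
    # each local maximum exceeding a threshold, processing one sample at a time
    # with O(1) memory, mirroring the bounded-resource style of QRE queries.
    def stream_peaks(samples, threshold):
        """Yield indices of samples that are local maxima above `threshold`."""
        prev, curr = None, None
        for i, nxt in enumerate(samples):
            if prev is not None and curr is not None:
                if curr > threshold and curr >= prev and curr > nxt:
                    yield i - 1  # `curr` sits at index i - 1
            prev, curr = curr, nxt

    ecg = [0.0, 0.2, 1.1, 0.3, 0.1, 0.9, 1.4, 0.2, 0.0]
    print(list(stream_peaks(ecg, threshold=1.0)))  # → [2, 6]
    ```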

    A Multiresolution Approach for Characterizing MFL Signatures from Gas Pipeline Inspections

    Full text link

    Iris classification based on sparse representations using on-line dictionary learning for large-scale de-duplication applications

    Get PDF
    De-duplication of biometrics is not scalable when the number of people to be enrolled into the biometric system runs into billions, while creating a unique identity for every person. In this paper, we propose iris classification based on sparse representations of log-Gabor wavelet features using on-line dictionary learning (ODL) for large-scale de-duplication applications. Four different iris classes based on iris fiber structures, namely, stream, flower, jewel and shaker, are used for faster retrieval of identities. Also, an iris adjudication process is illustrated by comparing the matched iris-pair images side-by-side to make the decision on the identification score using color coding. Iris classification and adjudication are included in the iris de-duplication architecture to speed up the identification process and to reduce identification errors. The efficacy of the proposed classification approach is demonstrated on the standard iris database, UPOL.
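
    The decision rule behind sparse-representation classification can be sketched as follows. This is a simplified nearest-subspace variant: the dictionaries here are random stand-ins for learned log-Gabor feature atoms, and a plain least-squares residual replaces the ODL-trained, sparsity-constrained coding the abstract actually uses:

    ```python
    import numpy as np

    # Simplified sketch: each class has a dictionary of feature vectors; a probe
    # is assigned to the class whose dictionary reconstructs it with the smallest
    # least-squares residual. The full method learns the dictionaries with
    # on-line dictionary learning and codes probes sparsely; this only
    # illustrates the residual-based decision rule.
    def classify_by_residual(probe, class_dicts):
        residuals = []
        for D in class_dicts:  # D has shape (n_features, n_atoms)
            coeffs, *_ = np.linalg.lstsq(D, probe, rcond=None)
            residuals.append(np.linalg.norm(probe - D @ coeffs))
        return int(np.argmin(residuals))

    rng = np.random.default_rng(0)
    D0 = rng.normal(size=(20, 5))      # dictionary for class 0 (illustrative)
    D1 = rng.normal(size=(20, 5))      # dictionary for class 1 (illustrative)
    probe = D0 @ rng.normal(size=5)    # probe lying in class 0's span
    print(classify_by_residual(probe, [D0, D1]))  # → 0
    ```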

    Super-resolution far-field ghost imaging via compressive sampling

    Full text link
    More image detail can be resolved by improving a system's imaging resolution, and enhancing the resolution beyond the system's Rayleigh diffraction limit is generally called super-resolution. By combining the sparse prior property of images with the ghost imaging method, we demonstrated experimentally that super-resolution imaging can be nonlocally achieved in the far field even without looking at the object. A physical explanation of super-resolution ghost imaging via compressive sampling and its potential applications are also discussed. (Comment: 4 pages, 4 figures.)
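
    The sparse-prior reconstruction that compressive sampling relies on can be sketched with iterative soft-thresholding (ISTA). The problem sizes, regularization weight, and random measurement matrix below are illustrative assumptions, not the paper's experimental setup:

    ```python
    import numpy as np

    # Sketch of compressive-sampling recovery via ISTA: a k-sparse signal x is
    # recovered from m < n random linear measurements y = A x by minimizing
    # 0.5 * ||y - A z||^2 + lam * ||z||_1.
    def ista(A, y, lam=0.01, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        z = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = z + A.T @ (y - A @ z) / L       # gradient step
            z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
        return z

    rng = np.random.default_rng(1)
    n, m, k = 100, 50, 3
    A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    z = ista(A, A @ x)
    print(np.linalg.norm(z - x))                # small: x is approximately recovered
    ```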

    Fast, scalable, Bayesian spike identification for multi-electrode arrays

    Get PDF
    We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable, that is, whose computational cost of spike fitting scales well with the number of units observed. Our algorithm accomplishes this goal, and is fast because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human intervention is minimized and streamlined via a graphical interface. We illustrate our method on data from a mammalian retina preparation and document its performance on simulated data consisting of spikes added to experimentally measured background noise. The algorithm is highly accurate.
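
    The candidate-event detection that precedes spike fitting can be sketched in plain Python. This covers only threshold crossing with a refractory period; the threshold and trace values are illustrative, and the full algorithm additionally fits overlapping spikes and exploits spatial locality across electrodes:

    ```python
    # Minimal sketch of the first stage of spike identification: threshold
    # crossings on a single electrode with a refractory period, so one spike is
    # not counted twice. Illustrative only; the abstract's algorithm also
    # resolves overlapping spikes from multiple units.
    def detect_spike_times(trace, threshold, refractory=3):
        times, last = [], -refractory
        for t, v in enumerate(trace):
            if v >= threshold and t - last >= refractory:
                times.append(t)
                last = t
        return times

    trace = [0, 1, 6, 5, 0, 0, 0, 7, 6, 0]
    print(detect_spike_times(trace, threshold=5))  # → [2, 7]
    ```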

    Wavelet-Based Detection of Outliers in Poisson INAR(1) Time Series

    Get PDF
    The presence of outliers or discrepant observations has a negative impact on time series modelling. This paper considers the problem of detecting outliers, additive or innovational, single, multiple or in patches, in count time series modelled by first-order Poisson integer-valued autoregressive, PoINAR(1), models. To address this problem, two wavelet-based approaches that allow the identification of the time points of outlier occurrence are proposed. The effectiveness of the proposed methods is illustrated with synthetic datasets as well as an observed dataset.
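
    The core detection idea can be sketched with a one-level Haar transform: an isolated additive outlier produces an unusually large detail coefficient, which is flagged against a robust cutoff. The cutoff constant and MAD rule below are illustrative assumptions; the paper works with PoINAR(1) models and multiple resolution levels:

    ```python
    # Sketch: one-level Haar detail coefficients of a count series, flagged
    # against a median-absolute-deviation (MAD) cutoff. An additive outlier in
    # one sample of a pair inflates that pair's detail coefficient.
    def haar_outlier_times(series, k=5.0):
        details = [(series[2*i] - series[2*i + 1]) / 2
                   for i in range(len(series) // 2)]
        med = sorted(details)[len(details) // 2]
        mad = sorted(abs(d - med) for d in details)[len(details) // 2] or 1.0
        flagged = []
        for i, d in enumerate(details):
            if abs(d - med) > k * mad:
                # the outlier is the larger of the two samples in this pair
                a, b = series[2*i], series[2*i + 1]
                flagged.append(2*i if a > b else 2*i + 1)
        return flagged

    counts = [3, 4, 2, 3, 4, 3, 25, 3, 2, 4, 3, 3]  # additive outlier at t = 6
    print(haar_outlier_times(counts))  # → [6]
    ```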

    Tiling array data analysis: a multiscale approach using wavelets

    Get PDF
    Background: Tiling array data is hard to interpret due to noise. The wavelet transformation is a widely used technique in signal processing for elucidating the true signal from noisy data. Consequently, we attempted to denoise representative tiling array datasets for ChIP-chip experiments using wavelets. In doing this, we used specific wavelet basis functions, Coiflets, since their triangular shape closely resembles the expected profiles of true ChIP-chip peaks.
    Results: In our wavelet-transformed data, we observed that noise tends to be confined to small scales while the useful signal-of-interest spans multiple large scales. We were also able to show that wavelet coefficients due to non-specific cross-hybridization follow a log-normal distribution, and we used this fact in developing a thresholding procedure. In particular, wavelets allow one to set an unambiguous, absolute threshold, which has been hard to define in ChIP-chip experiments. One can set this threshold by requiring a similar confidence level at different length-scales of the transformed signal. We applied our algorithm to a number of representative ChIP-chip data sets, including those of Pol II and histone modifications, which have a diverse distribution of length-scales of biochemical activity, including some broad peaks.
    Conclusions: Finally, we benchmarked our method in comparison to other approaches for scoring ChIP-chip data using spike-ins on the ENCODE Nimblegen tiling array. This comparison demonstrated excellent performance, with wavelets getting the best overall score.
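
    The denoising mechanics can be sketched with a one-level Haar transform: soft-threshold the small-scale detail coefficients, where the abstract observes noise concentrates, then invert. The Haar basis, threshold value, and synthetic signal are illustrative stand-ins for the Coiflets, multiple scales, and log-normal threshold the paper actually uses:

    ```python
    import math, random

    # Sketch: one-level Haar decomposition, soft-threshold the detail
    # coefficients, reconstruct. Noise living at the smallest scale is shrunk
    # while the broad signal, carried by the approximation, is preserved.
    def haar_denoise(signal, threshold):
        approx = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal)//2)]
        detail = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal)//2)]
        soft = lambda d: math.copysign(max(abs(d) - threshold, 0.0), d)
        detail = [soft(d) for d in detail]
        out = []
        for a, d in zip(approx, detail):
            out += [a + d, a - d]
        return out

    random.seed(0)
    clean = [1.0] * 8 + [5.0] * 8                       # a broad "peak"
    noisy = [c + random.gauss(0, 0.3) for c in clean]
    den = haar_denoise(noisy, threshold=0.4)
    err_noisy = sum((n - c) ** 2 for n, c in zip(noisy, clean))
    err_den = sum((d - c) ** 2 for d, c in zip(den, clean))
    print(err_den < err_noisy)  # → True: thresholding reduced the noise
    ```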